14 research outputs found

    Representations for Cognitive Vision: A Review of Appearance-Based, Spatio-Temporal, and Graph-Based Approaches

    The emerging discipline of cognitive vision requires a proper representation of visual information, including spatial and temporal relationships, scenes, events, semantics, and context. This review article summarizes existing representational schemes in computer vision which might be useful for cognitive vision, and discusses promising future research directions. The various approaches are categorized into appearance-based, spatio-temporal, and graph-based representations for cognitive vision. While the representation of objects has been covered extensively in computer vision research, both from a reconstruction and from a recognition point of view, cognitive vision will also require new ideas on how to represent scenes. We introduce new concepts for scene representations and discuss how these might be efficiently implemented in future cognitive vision systems.

    Sparse 3D Reconstruction of a Room

    Many methods exist today for computing 3D models of office rooms, but all of them share one disadvantage: long computation time. In this work, we describe the computation of a sparse 3D model of a room that can be performed in a short time. To this end, we extend the stereo reconstruction method to panoramic images and calibrate the camera with a method that benefits from the image acquisition process and from the innovative design of our self-developed artificial targets. Experimental results show the feasibility of the proposed 3D reconstruction.
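At the core of any stereo reconstruction such as the one summarized above is triangulating a 3D point from a pair of corresponding pixels. A minimal sketch for the rectified perspective case (the paper extends this idea to panoramic images; the focal length, baseline, and pixel coordinates below are illustrative values, not from the paper):

```python
# Depth-from-disparity triangulation for a rectified stereo pair.
# All numeric values are hypothetical examples, not the paper's setup.

def triangulate(f, baseline, xl, xr, y):
    """Recover a 3D point (camera frame) from a rectified correspondence.

    f        -- focal length in pixels
    baseline -- distance between the two camera centers in meters
    xl, xr   -- x-coordinates of the match in left/right image (pixels)
    y        -- shared y-coordinate in the rectified images (pixels)
    """
    d = xl - xr              # disparity in pixels
    Z = f * baseline / d     # depth along the optical axis
    X = Z * xl / f           # back-project the left pixel
    Y = Z * y / f
    return X, Y, Z

X, Y, Z = triangulate(f=700.0, baseline=0.12, xl=350.0, xr=280.0, y=100.0)
# Z = 700 * 0.12 / 70 = 1.2 m
```

A sparse model is obtained by running this over a sparse set of matched feature points rather than every pixel, which is what keeps the computation time short.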

    GPSlam: Marrying Sparse Geometric and Dense Probabilistic Visual Mapping (Pirker et al.)

    We propose a novel, hybrid SLAM system that constructs a dense occupancy grid map from sparse visual features and dense depth information. While previous approaches deemed the occupancy grid usable only in 2D mapping and in combination with a probabilistic approach, we show that geometric SLAM can produce consistent, robust, and dense occupancy information, and maintain it even during erroneous exploration and loop closure. We require only a single hypothesis of the occupancy map and employ a weighted inverse mapping scheme to align it to the sparse geometric information. We propose a novel map-update criterion to prevent inconsistencies, and a robust measure to discriminate exploration from localization.
    Figure 1: Dense occupancy map of a typical indoor environment.
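Occupancy grids like the one described above are conventionally maintained with a per-cell log-odds update fused from an inverse sensor model. A minimal sketch of that standard building block (the hit/miss probabilities are generic textbook values, not the paper's weighted scheme):

```python
import math

# Standard log-odds occupancy update, the common core of dense grid
# mapping. P_HIT / P_MISS are illustrative inverse-sensor-model values.
P_HIT, P_MISS, P_PRIOR = 0.7, 0.4, 0.5

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1.0 - p))

def update_cell(l, hit):
    """Fuse one range measurement into a cell's log-odds value."""
    meas = logit(P_HIT) if hit else logit(P_MISS)
    return l + meas - logit(P_PRIOR)

def probability(l):
    """Convert log-odds back to an occupancy probability."""
    return 1.0 - 1.0 / (1.0 + math.exp(l))

l = 0.0                       # prior: unknown (p = 0.5)
for _ in range(3):            # three consistent "occupied" readings
    l = update_cell(l, hit=True)
print(round(probability(l), 3))   # prints 0.927
```

Because the update is additive in log-odds, measurements can be fused in any order, and a later correction (e.g. after loop closure) amounts to re-accumulating the affected cells.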

    An omnidirectional time-of-flight camera and its application to indoor SLAM

    Photonic mixer devices (PMDs) can produce reliable depth maps of indoor environments. Yet their application in mobile robotics, especially in simultaneous localization and mapping (SLAM), is hampered by their limited field of view. Enhancing the field of view with optical devices is not trivial, because the active light source and the sensor rays must be redirected in a defined manner. In this work we propose an omnidirectional PMD sensor that is well suited for indoor SLAM and easy to calibrate. Using a single sensor and multiple planar mirrors, we can reliably navigate indoor environments and create geometrically consistent maps, even on optically difficult surfaces.
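The geometric operation behind such mirror-based field-of-view extension is reflecting each sensor ray across a known planar mirror. A minimal sketch (the mirror orientation below is an arbitrary example, not the paper's calibrated rig):

```python
import numpy as np

# Redirect a sensor ray off a planar mirror with unit normal n:
#   d' = d - 2 (d . n) n
# The 45-degree mirror below is a hypothetical example configuration.

def reflect(direction, normal):
    """Reflect a ray direction across a plane with the given normal."""
    n = normal / np.linalg.norm(normal)   # ensure unit length
    return direction - 2.0 * np.dot(direction, n) * n

ray = np.array([0.0, 0.0, 1.0])           # sensor looks along +z
mirror_normal = np.array([0.0, 1.0, -1.0])  # mirror tilted 45 degrees
out = reflect(ray, mirror_normal)
print(out)  # ray redirected along +y: [0. 1. 0.]
```

Calibration then amounts to estimating each mirror's plane parameters so that every measured depth can be mapped back into a single consistent sensor frame.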